Predict crime


Tech company boasts its AI can predict crime with social media policing while fighting Meta in court

FOX News

Haywood Talcove, CEO of LexisNexis Risk Solutions' government group, tells Fox News Digital that criminal groups, mostly in other countries, are advertising on social media to market their AI capabilities for fraud and other crimes. A tech company that boasts about its ability to use artificial intelligence to predict crime is in the midst of a privacy lawsuit with Meta, formerly Facebook, which wants the company banned from its platform. The New York City and Los Angeles police departments, two of the largest police agencies in the U.S., are among a growing list of law enforcement agencies in the U.S. and around the world to contract with Voyager Labs. In 2018, the New York Police Department agreed to a nearly $9 million deal with Voyager Labs, which claims it can use AI to predict crimes, according to documents obtained by the Surveillance Technology Oversight Project (STOP), The Guardian reported. The company bills itself as a "world leader" in AI-based analytics investigations that can comb through mounds of information from all corners of the internet – including social media and the dark web – to provide insight, uncover potential risks and predict future crimes.


The never-ending quest to predict crime using AI

Washington Post - Technology News

In Chicago, the police used predictive policing software from the Illinois Institute of Technology to create a list of people most likely to be involved in a violent crime. A study from RAND and a subsequent investigation by the Chicago Sun-Times showed that the list included every single person arrested or fingerprinted in Chicago since 2013. The program was scrapped in 2020.


Researchers use AI to predict crime, biased policing in major U.S. cities like L.A.

Los Angeles Times

For once, algorithms that predict crime might be used to uncover bias in policing, instead of reinforcing it. A group of social and data scientists developed a machine learning tool they hoped would better predict crime. The scientists say they succeeded, but their work also revealed inferior police protection in poorer neighborhoods in eight major U.S. cities, including Los Angeles. Instead of justifying more aggressive policing in those areas, however, the hope is the technology will lead to "changes in policy that result in more equitable, need-based resource allocation," including sending officials other than law enforcement to certain kinds of calls, according to a report published Thursday in the journal Nature Human Behaviour. The tool, developed by a team led by University of Chicago professor Ishanu Chattopadhyay, forecasts crime by spotting patterns amid vast amounts of public data on property crimes and crimes of violence, learning from the data as it goes.
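
To make "spotting patterns amid vast amounts of public data" a little more concrete, the sketch below shows the general shape of such a forecaster: predict next week's incident count in each map cell from that cell's recent history. The data are synthetic and PoissonRegressor is just one plausible choice for count data; this is an illustration only, not the Chicago team's published model.

```python
# Synthetic illustration of grid-based crime forecasting: predict next week's
# incident count in each map cell from that cell's recent history.
# NOT the University of Chicago team's published model -- the data are made up.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)

# Toy dataset: 200 grid cells x 52 weeks of reported-incident counts.
counts = rng.poisson(lam=3.0, size=(200, 52))

# Features: the previous 4 weeks of counts in a cell; target: the week that follows.
X, y = [], []
for cell in counts:
    for t in range(4, cell.shape[0]):
        X.append(cell[t - 4:t])
        y.append(cell[t])
X, y = np.array(X), np.array(y)

model = PoissonRegressor().fit(X, y)        # Poisson regression suits count targets
next_week = model.predict(counts[:, -4:])   # forecast each cell from its last 4 weeks
print(next_week[:5])
```

Even a toy like this makes the terms of the debate concrete: the forecasts can only ever be as good, and as biased, as the historical reports they are trained on.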


Researchers are using AI to predict crime, again

#artificialintelligence

Scientists are looking for a way to predict crime using, you guessed it, artificial intelligence. There are loads of studies showing that using AI to predict crime results in consistently racist outcomes. For instance, one AI crime-prediction model that the Chicago Police Department tried out in 2016 was meant to shed its racist biases but had the opposite effect: it was designed to predict who might be most at risk of being involved in a shooting, yet 56% of Black men in the city aged 20 to 29 appeared on its list. Despite it all, scientists are still trying to use AI to find out when, and where, crime might occur.


TechScape: Can AI Really Predict Crime? - AI Summary

#artificialintelligence

The programme used historical crime data like arrests, calls for service, field interview cards – which police filled out with identifying information every time they stopped someone regardless of the reason – and more to map out "problem areas" for officers to focus their efforts on or assign criminal risk scores to individuals. Documents the Guardian reviewed and wrote about in November show that Voyager Analytics claimed it could use AI to analyse social media profiles to detect emerging threats based on a person's friends, groups, posts and more. In a case study showing how Voyager's software could be used to detect people who "most fully identify with a stance or any given topic," the company looked at the ways it would have analysed the social media presence of Adam Alsahli, who was killed last year while attempting to attack the Corpus Christi naval base in Texas. From the New York Times: "In 2018, thousands of Google employees signed a letter protesting the company's involvement in Project Maven, a military program that uses artificial intelligence to interpret video images and could be used to refine the targeting of drone strikes. Now, as Google positions cloud computing as a key part of its future, the bid for the new Pentagon contract could test the boundaries of those AI principles, which have set it apart from other tech giants that routinely seek military and intelligence work."


TechScape: can AI really predict crime?

The Guardian

In 2011, the Los Angeles police department rolled out a novel approach to policing called Operation Laser. Laser – which stood for Los Angeles Strategic Extraction and Restoration – was the first predictive policing programme of its kind in the US, allowing the LAPD to use historical data to predict with laser precision (hence the name) where future crimes might be committed and who might commit them. But it was anything but precise. The programme used historical crime data like arrests, calls for service, field interview cards – which police filled out with identifying information every time they stopped someone, regardless of the reason – and more to map out "problem areas" for officers to focus their efforts on, or to assign criminal risk scores to individuals. Information collected during these policing efforts was fed into computer software that further helped automate the department's crime-prediction efforts.
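
As a purely hypothetical illustration of what "assigning criminal risk scores to individuals" can amount to in practice, the sketch below sums weighted counts of past police contacts into a single number. The categories and weights are invented for the example; they are not Operation Laser's actual scoring formula.

```python
# Hypothetical points-based risk score built from counts of past police contacts.
# The categories and weights are invented for illustration only -- they are NOT
# Operation Laser's actual formula.
from dataclasses import dataclass

@dataclass
class ContactHistory:
    arrests: int
    field_interview_cards: int
    calls_for_service: int

WEIGHTS = {"arrests": 5, "field_interview_cards": 1, "calls_for_service": 2}  # invented

def risk_score(history: ContactHistory) -> int:
    """Sum weighted counts of prior police contacts into one score."""
    return (WEIGHTS["arrests"] * history.arrests
            + WEIGHTS["field_interview_cards"] * history.field_interview_cards
            + WEIGHTS["calls_for_service"] * history.calls_for_service)

# Two stops this year mean two field interview cards -- and a higher score,
# whatever the reason for the stops.
print(risk_score(ContactHistory(arrests=1, field_interview_cards=2, calls_for_service=0)))  # 7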


Why are so many scummy, scammy AI companies thriving?

#artificialintelligence

After about the fifth or sixth time you read an article about a Black man being wrongfully arrested due to faulty facial recognition AI, you start to wonder why nobody seems to be doing anything to stop this from happening. Sure, whenever something goes wrong, the company behind the software is always working to improve results, and the law enforcement agency using the software is always reviewing procedures to ensure it doesn't happen again. Yet it seems a day doesn't go by without a law enforcement agency being exposed for misusing facial recognition or predictive-policing systems.


The benefits of facial recognition AI are being wildly overstated

#artificialintelligence

Facial recognition technology has run amok across the globe. In the US it continues to proliferate at an alarming rate despite bipartisan push-back from politicians and bans in several jurisdictions. Even China's government has begun to question whether there's enough benefit to the use of ubiquitous surveillance tech to justify the utter destruction of public privacy. The truth of the matter is that facial recognition technology serves only two legitimate purposes: access control and surveillance. And, far too often, the people developing the technology aren't the ones who ultimately determine how it's used. Most decent, law-abiding citizens don't mind being filmed in public and, to a certain degree, take no exception to the use of facial recognition technology in places where it makes sense.


Profiling The Attacker: Using Natural Language Processing To Predict Crime - James Stevenson

#artificialintelligence

What do Minority Report, Black Mirror and 1984 all have in common? Well, turn up to the talk to find out. On a day-to-day basis we write countless notes, send messages and respond to emails. The question is: what does what we write actually reveal about us, and how can we use the meaning behind these pieces of text to predict crimes and attacks? This talk delves into exactly that: how machine learning, and specifically natural language processing and sentiment analysis, can be used to predict crime and security attacks.
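
For readers who want a concrete picture of what "sentiment analysis" means here, the minimal sketch below scores two made-up messages with NLTK's off-the-shelf VADER analyser. It illustrates one NLP signal only; it is not the speaker's actual pipeline, and the example messages are invented.

```python
# Minimal sentiment-analysis example using NLTK's VADER lexicon.
# Illustrates one NLP signal only -- not the talk's actual pipeline.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
sia = SentimentIntensityAnalyzer()

messages = [
    "Looking forward to seeing everyone at the meetup next week!",
    "They will regret ignoring me. All of them.",
]
for text in messages:
    scores = sia.polarity_scores(text)       # keys: 'neg', 'neu', 'pos', 'compound'
    print(f"{scores['compound']:+.2f}  {text}")
```

A real system would layer far more context on top of a raw polarity score, and it inherits exactly the bias and accuracy questions the other pieces in this roundup raise.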